KMID : 1022420210130010045
Phonetics and Speech Sciences
2021, Vol. 13, No. 1, pp. 45-51
Hyperparameter experiments on end-to-end automatic speech recognition*
Yang Hyung-Won, Nam Ho-Sung
Abstract
End-to-end (E2E) automatic speech recognition (ASR) has achieved promising performance gains with the introduction of the self-attention network, the Transformer. However, because of the long training time and the large number of hyperparameters, finding the optimal hyperparameter set is computationally expensive. This paper investigates the impact of hyperparameters in the Transformer network to answer two questions: which hyperparameters play a critical role in task performance, and which affect training speed. The Transformer model used for training combines an encoder and a decoder network with Connectionist Temporal Classification (CTC). We trained the model on Wall Street Journal (WSJ) SI-284 and tested it on dev93 and eval92. Seventeen hyperparameters were selected from the ESPnet training configuration, and varying ranges of values were used for the experiments. The results show that the "num blocks" and "linear units" hyperparameters in the encoder and decoder networks reduce the Word Error Rate (WER) significantly, and the performance gain is more prominent when they are altered in the encoder network. Training duration also increased linearly as the values of "num blocks" and "linear units" grew. Based on the experimental results, we collected the optimal value of each hyperparameter and reduced the WER by up to 2.9/1.9 on dev93 and eval92, respectively.
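The experiments vary one hyperparameter at a time around an ESPnet-style Transformer baseline. Below is a minimal Python sketch (not the authors' code) of such a one-factor-at-a-time sweep over the two hyperparameters the abstract highlights, "num_blocks" and "linear_units", in the encoder and decoder. The key names mirror ESPnet Transformer configuration fields, but the baseline values, sweep ranges, and the way configs would be handed to a training recipe are illustrative assumptions.

```python
# Sketch of a one-factor-at-a-time hyperparameter sweep (illustrative only).
import copy

# Assumed baseline; actual values depend on the ESPnet WSJ recipe used.
BASELINE = {
    "encoder": {"num_blocks": 12, "linear_units": 2048},
    "decoder": {"num_blocks": 6, "linear_units": 2048},
    "ctc_weight": 0.3,  # hybrid CTC/attention training, as described in the abstract
}

# Assumed candidate values for each swept hyperparameter.
SWEEP = {
    ("encoder", "num_blocks"): [6, 12, 18],
    ("encoder", "linear_units"): [1024, 2048, 4096],
    ("decoder", "num_blocks"): [3, 6, 9],
    ("decoder", "linear_units"): [1024, 2048, 4096],
}


def make_configs():
    """Yield one config per (hyperparameter, value) pair, all else at baseline."""
    for (module, name), values in SWEEP.items():
        for value in values:
            cfg = copy.deepcopy(BASELINE)
            cfg[module][name] = value
            yield f"{module}.{name}={value}", cfg


if __name__ == "__main__":
    for tag, cfg in make_configs():
        # In a real experiment each config would be written to a YAML file and
        # passed to an ESPnet training run; here we only enumerate the runs.
        print(tag, cfg)
```

This mirrors the paper's setup only in spirit: each run changes a single value, so differences in WER and training time can be attributed to that hyperparameter.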
KEYWORDS
automatic speech recognition, transformer, neural network, hyperparameters, optimization
Listed journal information
Korea Research Foundation (KCI)